Simultaneous equations - Kumar's method

This page arose from an online discussion on LinkedIn.
An inventive, creative teacher called Kumar J has developed a novel way to solve simultaneous equations. Here's his original post:

Solve x+y=5, 5x-y=7. Plot (1,1) and marked as 5. plot (5,-1) as7. Connect those two points. This line cuts the x-axis at 3, so I will say 3x and the pattern of the number at the point is 5 and then 7 it means at the x-axis it has to be 6. So 3x=6 and x=2 extend the line to Y-axis it cuts at 1.5y if the pattern continues ( it will because it is a st. line) the value on the y-axis has to be 4.5 which makes 1.5y=4.5 so y =3. System is solved. Please give your comments on this.( The points are the coefficients of x and y and the point on the graph is not a point it is the constant continue the trend on both sides on the graph)

I really liked the originality of this and spent some time trying to test it with a wide range of examples.
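Before building the tool, it helped me to write his recipe out in code. This is a minimal sketch of my own reading of his method — the function name and the linear-interpolation framing are mine, not Kumar's:

```python
def kumar_solve(a, b, c, p, q, r):
    """Solve ax + by = c and px + qy = r by (my reading of) Kumar's method.

    Plot P = (a, b) with value c and Q = (p, q) with value r, then
    linearly interpolate along the line PQ to reach each axis.
    """
    if b == q:
        raise ValueError("points horizontally aligned: line PQ misses the x-axis")
    if a == p:
        raise ValueError("points vertically aligned: line PQ misses the y-axis")

    # Parameter t along P -> Q where the line crosses the x-axis (y = 0)
    t_x = b / (b - q)
    x_intercept = a + t_x * (p - a)   # "3" in Kumar's example
    x_value = c + t_x * (r - c)       # "6" in Kumar's example
    x = x_value / x_intercept         # 3x = 6  ->  x = 2

    # Parameter t along P -> Q where the line crosses the y-axis (x = 0)
    t_y = a / (a - p)
    y_intercept = b + t_y * (q - b)   # "1.5" in Kumar's example
    y_value = c + t_y * (r - c)       # "4.5" in Kumar's example
    y = y_value / y_intercept         # 1.5y = 4.5  ->  y = 3

    return x, y

print(kumar_solve(1, 1, 5, 5, -1, 7))  # Kumar's example: x + y = 5, 5x - y = 7 → (2.0, 3.0)
```

Running it on Kumar's own example reproduces his intermediate values (3 and 6 on the x-axis, 1.5 and 4.5 on the y-axis) as well as the solution x = 2, y = 3.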

To help me I created an interactive tool that follows his algorithm for a general pair of linear equations:

$$ax + by = c$$

and

$$px + qy = r$$

Here it is:


Have a play with the sliders to create a range of simultaneous equations to solve.
Obviously the solution to the equations is where the red and blue lines cross.
When Kumar's method works you'll see this intersection marked by a hollow green dot.
Can you make the dot disappear?
What do you notice about the alignment of Kumar's points (the two larger green blobs) when this happens?
If it helps, you can turn on the line between them by clicking the dot next to equation 15.

Now have a much closer look at the functions for s, t, u and v.

How do they relate to the description Kumar gave of his algorithm?
How do they relate to the graph?
How do I arrive at the hollow green dot?

So, after exploring this for a while, where are we?
  1. Kumar has certainly found a novel way to solve simultaneous equations and it seems to work almost all the time.
  2. It's claimed to be more efficient for the sort of integer-coefficient problems found on timed exams, so it may confer a computational advantage in this situation.
  3. But it doesn't work in at least some examples when either a = p or b = q.
  4. We haven't proved it works in all other cases.
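Point 3 is easy to check with a concrete case of my own choosing (the coefficients below are illustrative, not from Kumar's post). Take x + 2y = 4 and 3x + 2y = 8, so b = q = 2: the system itself is perfectly solvable, but Kumar's points (1, 2) and (3, 2) lie on a horizontal line that never meets the x-axis.

```python
# A concrete failure case for point 3: x + 2y = 4 and 3x + 2y = 8, so b = q = 2.
a, b, c = 1, 2, 4
p, q, r = 3, 2, 8

# Ordinary elimination still works fine:
x = (c * q - b * r) / (a * q - b * p)   # x = 2
y = (a * r - c * p) / (a * q - b * p)   # y = 1
print(x, y)

# ...but Kumar's x-intercept step would divide by b - q = 0:
assert b == q
```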

So let's set about proving it works:
1. The standard algebraic approach in the general case.
Kumar1.jpg
2. Solution by elimination
Kumar2.jpg
3. Kumar's method part 1 - determining the x-intercept and its associated value
Kumar3.jpg
4. Kumar's method part 2 - determining the y-intercept and its associated value
Kumar4.jpg
5. Kumar's method part 3 - using these values to find the solution and rearranging to match earlier expressions
Kumar5.jpg
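For anyone who wants steps 3 and 4 in symbols, here is my own sketch of the algebra, writing the two equations as $ax + by = c$ and $px + qy = r$:

```latex
% Kumar's points: P = (a, b) with value c, Q = (p, q) with value r.
% Step 3: the line PQ meets the x-axis (y = 0) at parameter t = b/(b - q), so
\[
  x\text{-intercept} = \frac{bp - aq}{b - q},
  \qquad
  \text{interpolated value} = \frac{br - cq}{b - q},
\]
% and dividing value by intercept gives
\[
  x = \frac{br - cq}{bp - aq}.
\]
% Step 4: the line meets the y-axis (x = 0) at t = a/(a - p), so
\[
  y\text{-intercept} = \frac{aq - bp}{a - p},
  \qquad
  \text{interpolated value} = \frac{ar - cp}{a - p},
  \qquad
  y = \frac{ar - cp}{aq - bp}.
\]
% The denominators b - q and a - p vanish precisely in the failure cases noted
% above; step 5 checks these expressions against the elimination formulas.
```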

So it turns out that we can prove Kumar's method gives the same answer as the standard algebraic approaches, but that it fails, for a good reason, when his points lie on a vertical or horizontal line: in those circumstances the line through them never meets one of the axes. In the algebra, these are exactly the occasions where we divide by zero.

For my part, I'm not convinced yet that any marginal improvement in calculation efficiency outweighs the obfuscation of what is being done in the algorithm to arrive at a solution. I do really appreciate Kumar's invention here though, and I'd love to know how he arrived at his algorithm. But for me, the real mathematical learning here is to be found in encouraging learners to do what we've done above: take a hypothesis, test it, look for counter-examples and develop proofs. And for the opportunity to do that I'd really like to thank Kumar for his idea.

Addendum:

With thanks to two further contributors (Ben Edwards and Arthur Gershon, PhD) to the LinkedIn discussion, there's another way to visualise this. Instead of assigning a "value" to Kumar's points, make it a third dimension.
This turns the equation

$$ax + by = c$$

into the point (a, b, c) in 3D space, with axes which I will call i, j and k.

This 'projective space' maps equivalent equations such as

$$\lambda a x + \lambda b y = \lambda c$$

(where λ is a non-zero constant) into a series of aligned points radiating in a straight line from the origin.

That is to say, if P (a, b, c) and Q (d, e, f) are related by OP = λOQ, then the equations

$$ax + by = c$$

and

$$dx + ey = f$$

represent the same line, and a = λd, b = λe and c = λf.

This has the advantage of allowing us to replace (a, b, c) with

$$\left(\frac{a}{c}, \frac{b}{c}, 1\right)$$

and (p, q, r) becomes

$$\left(\frac{p}{r}, \frac{q}{r}, 1\right)$$

(provided c and r are non-zero).

This has the huge advantage of reducing us back to working in 2D since the points (and of course the line between them) are both in the plane k = 1. So from here we drop the k coordinate.

We now represent the original problem of solving the simultaneous equations

$$ax + by = c \qquad \text{and} \qquad px + qy = r$$

by determining the line joining the corresponding points in the projective space:

$$\left(\frac{a}{c}, \frac{b}{c}\right) \qquad \text{and} \qquad \left(\frac{p}{r}, \frac{q}{r}\right)$$


Next we find the equation of the line between them using the standard equation of the line through two points $(i_1, j_1)$ and $(i_2, j_2)$:

$$\frac{j - j_1}{j_2 - j_1} = \frac{i - i_1}{i_2 - i_1}$$

which here is

$$\frac{j - \frac{b}{c}}{\frac{q}{r} - \frac{b}{c}} = \frac{i - \frac{a}{c}}{\frac{p}{r} - \frac{a}{c}}$$


Multiply both sides by

$$\frac{q}{r} - \frac{b}{c}$$

to give

$$j - \frac{b}{c} = \left(\frac{q}{r} - \frac{b}{c}\right)\frac{i - \frac{a}{c}}{\frac{p}{r} - \frac{a}{c}}$$

Separating the LHS into two terms and multiplying by the RHS denominator:

$$j\left(\frac{p}{r} - \frac{a}{c}\right) - \frac{b}{c}\left(\frac{p}{r} - \frac{a}{c}\right) = \left(\frac{q}{r} - \frac{b}{c}\right)\left(i - \frac{a}{c}\right)$$

...expanding and creating a common denominator on both sides:

$$\frac{jpc^2 - jacr - bpc + abr}{rc^2} = \frac{iqc^2 - ibcr - aqc + abr}{rc^2}$$

a fair bit of cancelling later:

$$i(cq - br) + j(ar - cp) = aq - bp$$


and dividing to leave RHS = 1:

$$\frac{cq - br}{aq - bp}\,i + \frac{ar - cp}{aq - bp}\,j = 1$$

a couple of tweaks to sort out signs gives

$$\frac{br - cq}{bp - aq}\,i + \frac{ar - cp}{aq - bp}\,j = 1$$

Now note that if we define

$$x = \frac{br - cq}{bp - aq}, \qquad y = \frac{ar - cp}{aq - bp}$$
then we arrive at Arthur's claim: the equation of the line between these projective points, written in the form

$$x\,i + y\,j = 1$$
corresponds to the solution to the original simultaneous equations; that is, the coefficients $x$ and $y$ of this line are precisely the values satisfying

$$ax + by = c \qquad \text{and} \qquad px + qy = r$$

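As a spot-check of Arthur's claim, here is a quick numerical verification on the running example. The code is my own; nothing in it comes from the original discussion:

```python
# Spot-check on the running example: x + y = 5 and 5x - y = 7 (solution x = 2, y = 3).
a, b, c = 1, 1, 5
p, q, r = 5, -1, 7

# Projective points in the plane k = 1, with the k coordinate dropped:
P = (a / c, b / c)        # (0.2, 0.2)
Q = (p / r, q / r)        # (5/7, -1/7)

# The solution of the original system, by elimination:
x = (b * r - c * q) / (b * p - a * q)
y = (a * r - c * p) / (a * q - b * p)

# Arthur's claim: the line x*i + y*j = 1 passes through both projective points.
for (i, j) in (P, Q):
    assert abs(x * i + y * j - 1) < 1e-12

print(x, y)  # 2.0 3.0
```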
Please feel free to comment below...